
    Continuous Interaction with a Virtual Human

    Attentive Speaking and Active Listening require that a Virtual Human be capable of simultaneous perception/interpretation and production of communicative behavior. A Virtual Human should be able to signal its attitude and attention while listening to its interaction partner, be able to attend to its interaction partner while speaking, and modify its communicative behavior on the fly based on what it perceives from its partner. This report presents the results of a four-week summer project that was part of eNTERFACE'10. The project resulted in progress on several aspects of continuous interaction, such as scheduling and interrupting multimodal behavior, automatic classification of listener responses, generation of response-eliciting behavior, and models for appropriate reactions to listener responses. A pilot user study was conducted with ten participants. In addition, the project yielded a number of deliverables that have been released for public access.

    The Corpus of Interactional Data: a Large Multimodal Annotated Resource


    System Guidelines for Co-located, Collaborative Work on a Tabletop Display

    Collaborative interactions with many existing digital tabletop systems lack the fluidity of collaborating around a table using traditional media. This paper presents a critical analysis of the current state of the art in digital tabletop systems research, targeted at discovering how user requirements for collaboration are currently being met and uncovering areas requiring further development. Consideration of research on tabletop displays, collaboration, and communication yielded several design guidelines for effective co-located collaboration around a tabletop display. These guidelines suggest that technology must support: (1) natural interpersonal interaction, (2) transitions between activities, (3) transitions between personal and group work, (4) transitions between tabletop collaboration and external work, (5) the use of physical objects, (6) accessing shared physical and digital objects, (7) flexible user arrangements, and (8) simultaneous user interactions. The critical analysis also revealed several important directions for future research, including: standardization of methods to evaluate co-located collaboration; comparative studies to determine the impact of existing system configurations on collaboration; and creation of a taxonomy of collaborative tasks to help determine which tasks and activities are suitable for tabletop collaboration.

    The Rovereto Emotion and Cooperation Corpus: a new resource to investigate cooperation and emotions

    The Rovereto Emotion and Cooperation Corpus (RECC) is a new resource collected to investigate the relationship between cooperation and emotions in an interactive setting. Previous attempts at collecting corpora to study emotions have shown that these data are often quite difficult to classify and analyse, and that coding schemes for annotating emotions are often found not to be reliable. We collected a corpus of task-oriented (MapTask-style) dialogues in Italian, in which the segments of emotional interest are identified using highly reliable psycho-physiological indexes (heart rate and galvanic skin conductance). We then annotated these segments in accordance with novel multimodal annotation schemes for cooperation (in terms of effort) and facial expressions (an indicator of emotional state). High agreement was obtained among coders on all the features. The RECC corpus is, to our knowledge, the first resource with psycho-physiological data aligned with verbal and nonverbal behaviour data. © 2011 Springer Science+Business Media B.V.

    Localization and Grading of Building Roof Damages in High-Resolution Aerial Images

    According to the United States National Centers for Environmental Information (NCEI), 2017 was one of the most expensive years on record for losses due to numerous weather and climate disaster events. To reduce the cost of handling insurance claims and of interactive loss adjustment, automatic methods that analyze post-disaster images of large areas are increasingly being employed. In our work, roof damage analysis was carried out on high-resolution aerial images captured after a devastating hurricane. We compared the performance of a conventional classifier (Random Forest), which operates on superpixels and relies on sophisticated hand-crafted features, with two Convolutional Neural Networks (CNNs) for semantic image segmentation, namely SegNet and DeepLabV3+. The results vary greatly depending on the complexity of the roof shapes. For homogeneous shapes, the results of all three methods are comparable and promising. For complex roof structures, the CNN-based approaches perform slightly better than the conventional classifier; the performance of the latter is, however, the most predictable with respect to the amount of training data and the most successful when that amount is low. At the building level, all three classifiers perform comparably well. However, an important prerequisite for accurate damage grading of each roof is its correct delineation. To achieve this, a procedure for multi-modal registration was developed and is summarized in this work. It allows freely available GIS data to be aligned with current image data, and it showed robust performance even for severely destroyed buildings.
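    The abstract above compares a superpixel-based classifier with CNN semantic segmentation models but does not state which evaluation metric was used. A common choice for such per-pixel comparisons is class-wise intersection-over-union (IoU); the following is a minimal illustrative sketch, not the authors' actual evaluation code, using toy label maps where 0 denotes an intact roof pixel and 1 a damaged one.

    ```python
    import numpy as np

    def iou(pred, truth, cls):
        """Intersection-over-union of one class label between two label maps."""
        p = (pred == cls)
        t = (truth == cls)
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        return inter / union if union else 0.0

    # toy 4x4 label maps (0 = intact, 1 = damaged); real inputs would be
    # full segmentation masks predicted by, e.g., SegNet or DeepLabV3+
    pred  = np.array([[0, 0, 1, 1],
                      [0, 1, 1, 1],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]])
    truth = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [0, 0, 0, 0]])

    print(iou(pred, truth, 1))  # 5 shared damaged pixels, 7 in the union
    ```

    Averaging this score over classes (mean IoU) gives a single number per method, which makes the kind of three-way comparison described above straightforward.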